3 research outputs found

    Investigating Perceptual Congruence Between Data and Display Dimensions in Sonification

    The relationships between sounds and their perceived meanings and connotations are complex, making auditory perception an important factor to consider when designing sonification systems. Listeners often have a mental model of how a data variable should sound during sonification, and this model is not considered in most data:sound mappings. This can lead to mappings that are difficult to use and cause confusion. To investigate this issue, we conducted a magnitude estimation experiment to map how roughness, noise and pitch relate to the perceived magnitude of stress, error and danger. These parameters were chosen on the basis of previous findings suggesting perceptual congruency between these auditory sensations and conceptual variables. Results from this experiment show that polarity and scaling preferences depend on the data:sound mapping. This work provides polarity and scaling values that sonification designers may use directly to improve auditory displays in areas such as accessible and mobile computing, process monitoring and biofeedback.
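The polarity and scaling choices discussed in the abstract can be made concrete with a minimal parameter-mapping sketch. The function below is illustrative only (its name, defaults, and parameters are assumptions, not values from the paper): a data value is normalised, optionally inverted by a polarity flag, scaled by a power-law exponent, and mapped exponentially onto a frequency range so that equal data steps correspond to equal musical intervals.

```python
def map_to_pitch(value, v_min, v_max, f_min=220.0, f_max=880.0,
                 polarity=+1, exponent=1.0):
    """Map a data value to a frequency (Hz) for sonification.

    polarity=+1 means larger data values sound higher; -1 inverts
    the mapping. exponent applies a power-law scaling to the
    normalised value (exponent != 1 gives non-linear scaling).
    All names and defaults here are illustrative.
    """
    # Normalise the data value to [0, 1]
    t = (value - v_min) / (v_max - v_min)
    if polarity < 0:
        t = 1.0 - t
    t = t ** exponent
    # Exponential interpolation: equal data steps map to equal
    # musical intervals between f_min and f_max
    return f_min * (f_max / f_min) ** t

# Mid-range value, positive polarity, linear scaling:
print(map_to_pitch(50, 0, 100))  # → 440.0
```

A designer applying the paper's results would set `polarity` and `exponent` per mapping (e.g. danger-to-pitch versus stress-to-roughness), rather than assuming one polarity fits all variables.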

    Sound for the exploration of space physics data

    Current analysis techniques for 2D numerical space physics data are based on scrutinising the data by eye. Space physics data sets acquired from the natural laboratory of the interstellar medium may contain events that are masked by noise, making them difficult to identify. This thesis presents research on the use of sound as an adjunct to current data visualisation techniques to explore, analyse and augment signatures in space physics data. It presents a new sonification technique for decomposing a space physics data set into different components of interest (frequency, oscillatory modes, etc.), and its use as an adjunct to data visualisation to explore and analyse space science data sets characterised by non-linearity (a system which does not satisfy the superposition principle, or whose output is not proportional to its input). Integrating aspects of multisensory perceptualization and human attention mechanisms, the question addressed by this dissertation is: does sound, used as an adjunct to current data visualisation, augment the perception of signatures in space physics data masked by noise? To answer this question, the following additional questions had to be answered: a) Is sound used as an adjunct to visualisation effective in increasing sensitivity to signals occurring at attended, unattended or unexpected locations, extended in space, when the signal occurs in the presence of a dynamically changing competing cognitive load (noise) that makes it visually ambiguous? b) How can multimodal perceptualization (sound as an adjunct to visualisation) and attention control mechanisms be combined to help allocate attention to identify visually ambiguous signals? One aim of these questions is to investigate whether sound used together with a visual display increases sensitivity to signal detection in the presence of visual noise in the data, compared with a visual display alone.
Radio, particle, wave and high-energy data are explored using a sonification technique developed as part of this research; the technique, its application and its results are numerically validated and presented. This thesis reports the results of three experiments and of a training experiment. In all four experiments, volunteers used sound as an adjunct to data visualisation to identify changes in graphical and audio representations, and these results are compared with those obtained using audio rendering only and visual rendering only. In the first experiment, audio rendering did not yield significant benefits when used alone or with a visual display. In the second and third experiments, audio as an adjunct to visual rendering became significantly beneficial when a fourth cue was added to the spectra: a red line sweeping across the visual display at the rate the sound was played, synchronising the audio and visual presentations. The results show that a third congruent multimodal stimulus in synchrony with the sound helps space scientists identify events masked by noise in 2D data. Results of the training experiments are also reported.
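The audio-visual synchronisation the thesis describes can be sketched in a few lines (the function name and parameters below are illustrative assumptions, not from the thesis): the cursor column is derived from elapsed playback time, so the red line sweeps across the display at exactly the rate the sound is played.

```python
def sweep_column(elapsed_s, duration_s, n_cols):
    """Column index of a cursor line synchronised to audio playback.

    elapsed_s: seconds since audio playback started
    duration_s: total duration of the sonified segment
    n_cols: number of columns (time bins) in the 2D visual display
    (Illustrative sketch; names and parameters are assumptions.)
    """
    # Clamp the playback fraction to [0, 1] so the cursor never
    # leaves the display even if timing drifts slightly
    frac = min(max(elapsed_s / duration_s, 0.0), 1.0)
    return min(int(frac * n_cols), n_cols - 1)

# Halfway through a 10 s sonification of a 200-column spectrogram,
# the cursor sits at column 100
print(sweep_column(5.0, 10.0, 200))  # → 100
```

Driving the cursor from the audio clock, rather than a separate timer, is what keeps the two modalities congruent over the whole sweep.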